Patent abstract:
SYSTEM AND METHOD FOR SIMULATING MAKEUP ON PORTABLE DEVICES EQUIPPED WITH DIGITAL CAMERA. The present invention relates to a system and method capable of performing virtual makeup on images obtained by means of portable devices equipped with digital cameras. This invention presents a method capable of automatically detecting points of interest (eyes, mouth, eyebrow, face contour) in the image of the user's face and of allowing the user to apply the makeup with the fingers on a touchscreen. From these two achievements, another algorithm was created to prevent the "smudging" of the makeup when it is applied with the user's fingers. With these features, the system created for the application of virtual makeup allows the creation of made-up faces with a high degree of precision. To perform virtual makeup, methods were created for: automatic detection of points of interest in the facial image, application of transparency to the image in a way that is similar to makeup, and restriction of the area of application of makeup to the region of interest. The objective of the proposed system, in this invention, is to create a way to test the application (...).
Publication number: BR102012033722B1
Application number: R102012033722-3
Filing date: 2012-12-28
Publication date: 2020-12-01
Inventors: Eduardo Telmo Fonseca Santos; Eduardo Manuel De Freitas Jorge; Vanessa Santos Medina; Fernando Campos Martins; Ana Lucia Pereira; Gustavo DE ALMEIDA NEVES; Luciano Rebouças De Oliveira
Applicant: Samsung Eletrônica da Amazônia Ltda.
IPC main classification:
Patent description:

Field of invention
The present invention is related to the field of human interaction with portable devices, more specifically to the interaction made through the processing of images stored or captured through the camera integrated in portable devices, in particular, cell phones, smartphones, Personal Digital Assistants (PDAs), portable digital cameras, among others.
Image processing techniques implemented in a makeup computer simulation system are used to blend graphic elements with stored or captured images. These techniques make it possible, for example, to simulate different types of makeup applied to eyelids, lips, cheekbones (region over zygomatic bones), eyebrows, eye and mouth contours and other regions of the face with control over intensity and positioning, which can be carried out manually, semi-automatically or automatically.
The image of the user obtained with the camera of the portable device can be modified on the screen of the portable device with the application of different types of makeup and/or other effects or graphic elements, using transparency, overlay and/or filtering techniques. The detection of the face and the contours of its regions is carried out automatically and can be subsequently edited by the user, as can the colors, intensities and patterns of makeup.
The system materialized for the present invention was developed to be executed on a portable device, which made it necessary to improve the efficiency of the methods and to adjust the procedures to the restrictions of memory space and computational power of portable devices. Seeking to make the system efficient and easy to use, an interface was adopted in which the user can use her hands to interact directly or indirectly with the makeup simulation system. In this way, an integrated portable system was obtained, materialized in hardware and software, which allows the simulation of makeup over static or dynamic images, captured with a camera or stored.
State of the art
Makeup is a means of beautification, of aesthetic differentiation, which instigates the imagination of men and women, and which has its largest consumer in the female audience, feeding the cosmetics industry and occupying part of the time in women's daily lives.
Much has evolved in the last decades in the area of face detection. One of the first methods for finding faces quickly in an image can be found in P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features", pp. 511-518, 2001. In that document, Haar wavelet characteristics, analyzed with boosting classifiers, are used in sliding windows, after previous training, to find objects of interest. In S. Brubaker; J. Wu; J. Sun; M. Mullin; J. Rehg, "On the Design of Cascades of Boosted Ensembles for Face Detection", International Journal on Computer Vision, pp. 65-86, 2008, a more detailed study on the use of the method presented by Viola et al. is carried out with the objective of designing good classifiers of parts of the face.
Among the needs for face detection in images, the use of points of interest is of fundamental importance in the documents S. Milborrow, F. Nicolls, "Locating Facial Features with an Extended Active Shape Model", European Conference on Computer Vision, pp. 504-513, 2008 and Liya Ding; Martinez, A. M., "Precise detailed detection of faces and facial features", IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1-7, 2008. With such points, it is possible to recognize parts of the face such as eyes, mouth and cheekbones, enabling applications ranging from augmented reality to the one suggested in the present invention for the application of virtual makeup. For example, from the teachings of Liya Ding et al., an extended active model (MFA) is used to detect landmarks in face images; MFA models use shape correspondence through distance metrics in order to recognize each shape obtained from the grouping of landmarks; with that, it is possible to track the points of interest in the image and detect its bordering regions. Methods such as Subclass Discriminant Analysis and Gaussian Models are used to robustly detect each part of a face image.
Patent document US6293284B1, entitled: VIRTUAL MAKE-OVER, published on September 25, 2001, describes a method and apparatus for the evaluation of cosmetics in a virtual facial image; from an image of the user, an assessment of the individual's skin tone is made in order to determine the best makeup colors. Taking into account the skin tone, it is possible, therefore, to suggest color palettes of lipsticks, blushes and other types of makeup that most suit the individual. No description is given about the methods used by the system for its realization.
The US patent document US6502583B1 entitled: METHOD OF CORRECTING FACE IMAGE, MAKEUP SIMULATION METHOD, MAKEUP METHOD, MAKEUP SUPPORTING DEVICE AND FOUNDATION TRANSFER FILM, published in January 2006, concerns a system for simulating makeup on facial images; regions of interest, such as eyebrows, eyes, nose and mouth, are chosen for the application of virtual makeup; nothing is described about the methods used to apply makeup to the facial image.
Patent document EP1975870A1 entitled: MAKEUP SIMULATION SYSTEM, MAKEUP SIMULATION DEVICE, MAKEUP SIMULATION METHOD, AND MAKEUP SIMULATION PROGRAM, published on January 17, 2006, presents various types of makeup that are simulated in a facial image captured by a camera; according to the teachings of the referred document, a device is created to capture the image of the face and, through a hardware and software interface, to compose the makeup in the captured photo, printing it later, if requested; the points of the face are extracted automatically, with such a process starting from the alignment of each region (mouth, eyebrow, nose, cheekbones, etc.) and continuing with the shape correspondence of each region and the segmentation of each region for makeup application; in that document, the computational methods for carrying it out are described generically, with no detailed description of the procedures for detecting points of interest or of how makeup is applied to the facial image.
The document US7634108B2 entitled: AUTOMATED FACE ENHANCEMENT, published on December 15, 2009, proposes a system and process to improve the quality of the face of interlocutors in videoconferences; initially, the user must point out the face and eye regions, which are then tracked. While temporally tracked, using a regression method called Moving Average, the image of the face is improved by "cosmetic" effects such as skin color correction and makeup around the eyes; as mentioned, the invention depends on the user's initialization to know where the face and its points of interest are, not being able to detect such elements automatically; the main objective of this invention, therefore, is to improve the image of the faces of videoconferencing users through makeup and color reconstruction processes; the method used for facial correction and improvement is the Gaussian mixture model.
The document US7643659B2 entitled: FACIAL FEATURE DETECTION ON MOBILE DEVICES, filed in January 2010, demonstrates the feasibility of applications for detecting points of interest on mobile devices, consequently making use of few computational resources to carry out the invention in an appropriate manner.
There are also other well-known applications developed specially for mobile devices with related functionalities, specified below: - LANCÔME MAKE-UP, an application available for the Apple iPhone and, more recently, for iPad and Android devices, which implements the Virtual Pallete module that allows painting a graphic illustration in simple strokes, using the fingers to apply Lancôme make-up products and later sending the finished image by e-mail or to Facebook. The application offers the experience of experimenting with mixing colors and intensities of makeup products, appreciated by users in exploratory usability tests. However, the application does not allow the capture or application of makeup to images of faces of real people, not implementing features of: (a) image recognition; (b) automatic detection of points of interest; or (c) applying makeup to faces over real skin tones.
MAKEUP SIMULATOR, an application available for Android and, more recently, for the Apple iPhone, allows the capture of photos or the use of example graphic illustrations for makeup application. It performs the analysis of the face image and the detection of points of interest (eyebrows, eyes, mouth, cheekbones and face contour), allowing the correction of lines and the application of various makeup products. Exploratory usability tests of the application showed that users were unhappy with: (a) the fact that with each product selection they failed to see the face image; (b) the absence of colors suitable for black skin tones; (c) the fact that, despite the variety of colors available per product, there is no hint about harmonizing the products; (d) the fact that the makeup product, whose color had been selected, is applied automatically, with the intensity regulated through a scroll bar, not allowing the experimentation of mixing colors and intensities obtained in the real experience of applying makeup with the fingers. - MAKEUP, an application developed by ModiFace, available for the Apple iPhone, claims functionality similar to that of the MakeUp Simulator application, despite its different graphical interface implementation. It allows capturing a photo from the device's image gallery or images from friends' photo galleries on Facebook. However, the application does not, in fact, implement image recognition and automatic detection of points of interest, leaving it to the user, instead, to position standard mask elements (points of interest) of eyes, eyebrows and mouth on the image to be made up. Exploratory usability tests of the application showed that users were unhappy with: (a) the difficulty of adjusting the mask of the points of interest on the image to be made up; (b) the fact that at each product selection they failed to see part of the face image; (c) the fact that there is a variety of colors available per product, but there is no hint about harmonizing products; (d) the fact that the automatic application of the makeup product, whose color had been selected, eliminates the experience of experimenting with mixing colors and intensities obtained in the real experience of applying makeup.
Other applications available for the web with related functionality have been identified, specified below:
The Natura S/A company offers virtual makeup functionality on the website http://www.adoromaeauty.com.br/macaras-virtual/ and accepts images sent by the user or available on web pages. The application does not recognize the image and does not identify points of interest. Instead, it asks the user to place masks on these points (iris, eye contours, eyebrows, mouth, teeth, face). From then on, a range of products and colors is offered, according to the skin tone and the user's selection. Makeup is applied automatically. The application allows comparing the effects of makeup, removing it and saving the result, but it does not allow posting it on social networks, despite links to Natura's pages on Facebook, Twitter, Flickr and YouTube.
Taaz (http://www.taaz.com/makeover.html) offers Virtual Makeover functionality from an image in the standard gallery on the website or from a photo sent by the user, but this functionality does not seem to work, and applying makeup to the gallery's photos does not require any image recognition method. Thus, the differential of the site is the graphic interface, the variety of products and colors and the indication of professional products related to the user's choices. The same is true on the Mary Kay website (http://www.marykay.com/whatsnew/virtualmakeover/default.aspx), or in its Brazilian version (http://www.marykay.com.br/vmo.html), in which case product indications are restricted to a brand.
Problems to be solved
Despite technological advances, handheld devices still have hardware with relatively limited resources to run algorithms that perform advanced calculations. Therefore, the present invention applied processing techniques in order to decrease the computational cost, allowing a reduction in the image processing time and minimizing the use of the processor of the portable device, in addition to saving memory, thus optimizing the use of resources and increasing the efficiency of the process described in this invention.
The conventional makeup process requires more time and resources than its computer simulation proposed by the present invention. In addition, the conventional process may require redoing the makeup, which is much simpler and faster in a computer simulation.
Adding to this the convenience of availability at any time and place, the use of portable devices makes it possible to save time and resources in the makeup simulation process.
The images obtained from the makeup simulation are easily stored and transmitted, as they are already in digital media, which facilitates diffusion and integration with other systems. Therefore, the makeup simulation process eliminates an intermediate step of capturing an image of the face after applying makeup, for the purpose of comparing the "before and after" type of the makeup process.
The use of computer simulation of makeup eliminates the need to transport a huge amount of items to test different makeups, since the user can simulate the desired effects to later physically perform the makeup, if she so wishes. This shows the flexibility and versatility to incorporate different types of makeup into the proposed process without the need to purchase products beforehand to test different makeups.
The method proposed in this invention also makes the makeup procedure more accessible and simple, for both laymen and professionals, helping to detect facial regions and apply makeup styles.
Thus, the proposed method allows combining ease of use, versatility, flexibility, saving time and resources, simplicity of storage and transmission, being an advantageous alternative or complement in relation to the conventional makeup process.
Technical / functional advantages of the invention
Versatility and flexibility: Use of methods to perform image processing applied to the makeup simulation on a captured or stored image of the user. In this way, it becomes possible to simulate different types of makeup on the portable device, reducing effort, cost and time spent on this process.
Low computational cost: In order to reduce processing time, programming practices were used that optimize the execution of computational instructions, such as using fixed point instead of floating point, using bit manipulation to perform some operations, and using as few instructions as possible to sweep or copy the pixels of an image, among others.
Intuitive interface: The interface allows the use of hands/fingers to simulate the application of makeup on the face, being very intuitive for the user of the conventional makeup process. Add to this the automation of the detection of face regions and makeup styles, bringing significant advantages over the conventional process. Interactive real-time simulation: The results of the user's interaction with the makeup simulation process are visualized in real time. The use of the hands reproduces the physical interaction that occurs in the conventional process, reproducing a pleasant experience of experimenting with mixing and intensity of colors, with the advantage of being able to perform makeup in a simple way even for laymen in aesthetics, following partial results throughout the process.
Ease of storage and diffusion: The results of the makeup simulation can be easily stored and transmitted, as they are already in digital media. This facilitates the dissemination of the results of the makeup simulation by the user. Add to that the possibility of creating a history of the makeups previously done, as the results can be easily stored.
Automatic detection of facial regions: The method of the present invention performs automatic detection of facial regions for the application of makeup. This detection can be used to define the regions affected by the application of makeup, thus avoiding the incorrect application of makeup.
Changing face regions: The regions automatically detected by the makeup simulation process can be changed by the user as desired. Thus, the efficiency of automatic detection is combined with the flexibility of adjustments by users.
Smoothing the contours of the makeup: The simulated makeup is automatically smoothed in the contours of the applied region, bringing greater realism and approaching the conventional makeup process. This also facilitates use, even by laymen in aesthetics. Ease of redoing makeup: Removing makeup can be done much faster than in the conventional process, in addition to being substantially more economical. Add to this the possibility of storing previous stages of makeup and returning to them more quickly.
Choice of face models: The user can apply the chosen makeup to the face model they want, either their own face or a stored image of someone else's face. The portable device allows image capture to simulate makeup.
Choice of makeup styles: The proposed process provides different makeup styles, different product color palettes for each style, which allows the creation of a harmonious combination in makeup. This facilitates the use of the system and allows a more consistent interface for the user.
Self-makeup facility: Viewing the captured image itself makes self-makeup easier. The intuitive interface of the proposed process facilitates the simulation of self-makeup by laypeople, with the possibility of redoing makeup more easily.
Reduction of costs and time: The method proposed by the present invention reduces the cost necessary to simulate different types of makeup, since computer simulation does not require the purchase of makeup items. Makeup changes are also much faster with automatic detection of face regions and the use of predefined makeup styles.
Space saving and ease of transport: The possibility of storing countless styles with different palettes and makeup products makes it possible to simulate makeup using only a portable device, which is equivalent to a set of physical items that would occupy space and would be less practical to transport. Thus, a makeup simulation process is obtained that saves space and is more easily transportable than its material equivalent in the conventional process.
Summary of the Invention
The present invention is a makeup simulation system using image processing, materialized through an integrated software and hardware solution that applies makeup effects and / or other graphic elements in real time by means of portable devices that have a digital camera. The system simulates makeup over static or dynamic images, using manual, semi-automatic or automatic identification of the face regions.
The present invention aims to provide a low-cost, efficient and effective solution, which allows users to simulate makeup over images of their own face or other images of stored faces, at any time and place they deem appropriate.
The greatest motivation found for the development of this invention is the users' manifest desire to experiment and simulate the application of different types of makeup, minimizing effort, cost and time spent. When choosing the appropriate makeup, the user usually applies several makeup products and changes the colors until the desired result is obtained. This process, seen as something attractive and capable of retaining users' attention, depends on the availability of time, makeup products and resources for its acquisition. Exploratory usability tests have shown that users expect to find current styles and colors appropriate to their skin tone and that they enjoy the experience of experimenting with color mixes and intensities provided by the very act of applying makeup. With the system developed by the present invention, this process becomes much more effective and efficient, offering the desirable user experience and providing a preliminary model before the physical application of makeup on the face.
In the following, a sequence of steps will be exemplified, using a materialization of the present invention, without restricting other possible sequences or materializations of using the method of the present invention. a) Initially, the user captures the image of the face with the aid of a crosshair in preview mode. Detection of facial regions is performed automatically for subsequent application of makeup. b) After detection, the user has the possibility to modify the lines that delimit the automatically detected face regions. c) Next, the user selects one of several makeup styles that are reflected in the color palettes of the products to be applied on the face.
Several products are simulated (e.g.: foundation, blush, eyeshadow, lipstick, contour pencil, etc.), with their respective colors selected by the user, as well as their intensity and method of application on the face. d) The user has the possibility to use a touch-sensitive interface to define the positioning and intensity of the makeup colors for the different products. e) Subsequently, the user can switch between the face images without makeup and with makeup for comparison, and there is also the possibility of removing the makeup to restart the process. f) If the user is satisfied with the result, they can store the image locally on the device or share it via e-mail or social networks configured on the portable device.
In short, the system of the present invention allows the user to use portable devices to simulate makeup and / or manipulate captured or stored images.
The detection of points of interest (eyes, mouth, eyebrow, face contour) in the user's face image is automatic and adjustable, and the system allows the user to apply the makeup with the fingers on a touch-sensitive screen, while avoiding the "smudging" of the makeup in its application with hand gestures. The makeup simulation is interactive, allowing the user to view the results as they interact with the system. In this way, an interactive, efficient and effective process for makeup simulation is obtained.
Brief Description of Drawings
The objects and advantages of the present invention will become more evident from the detailed description of an example of an embodiment of the invention in the following section, and of figures attached as a non-limiting example, in which:
Figure 1 presents a generic model of the portable device where the system should work.
Figure 2 illustrates the system interface presented to the user, containing a crosshair to facilitate the framing of the face and, consequently, the position and distance from the device's camera.
Figure 3 illustrates the automatic identification of points of interest, as well as features for adjusting these points, which allow outlining the segmentation masks of the face regions.
Figure 4 illustrates the interface used for applying makeup.
Figure 5 shows the sequence of the three main stages of the makeup simulation process.
Figure 6 shows the flow of the steps involved in the step "Location of the face regions".
Figure 7 illustrates the first step of the location of the face regions, the cropping of the crosshair region.
Figure 8 illustrates the location of the face, eyes and mouth.
Figure 9 shows the flow of the steps involved in the step "Obtaining points of interest from the face".
Figure 10 illustrates the steps to obtain the points of interest of the eyes.
Figure 11 illustrates the steps for obtaining points of interest from the mouth.
Figure 12 shows the detail of the mapping of points of interest in the mouth.
Figure 13 illustrates the steps to obtain the points of interest of the eyebrows.
Figure 14 shows the detail of the mapping of the points of interest of the eyebrows.
Figure 15 shows the directions in which the sliding window used to search for points of interest in the face contour mask moves.
Figure 16 shows the regions corresponding to the makeup products used in the preferred embodiment of the present invention.
Figure 17 shows the flow of creation of masks for applying makeup products.
Figure 18 illustrates the eye shadow masks.
Figure 19 shows the reference points and points used to create the masks for the left eye shadow.
Figure 20 shows the reference points and points used to create the masks for the shadow of the right eye.
Figure 21 shows the reference points and points used to create the masks for the left eye pencil.
Figure 22 shows the mascara application masks.
Figure 23 shows the lipstick application mask.
Figure 24 shows the mouth pencil application mask.
Figure 25 illustrates obtaining the mask used in the application of the foundation.
Figure 26 shows the masks used in applying blush.
Figure 27 shows the reference points and the points used to create the masks for applying blush on the left side of the face.
Figure 28 shows the flow used for applying makeup.
Figure 29 shows the selection of products used in the present invention.
Figure 30 shows the procedure used to create the mask for the region to be made up.
Description of preferred embodiments of the invention
The present invention was materialized, by way of non-limiting example, in a system that simulates the makeup process in a digital photo obtained through portable devices equipped with a digital camera. From the system materialized by the present invention, it became possible to develop a method for previewing makeup, allowing the user to visualize the result of its application on a portable device and to try different makeups quickly and effectively. Thus, a preferred method was obtained, as the preferred embodiment of the invention, that detects points of interest on the face, maps the areas of the face where makeup is applied, uses a touch-sensitive interface to paint the makeup, avoids "smudging" of the applied makeup and combines the color of the skin with the color of the product, simulating the application of real makeup. The described method is executed in an integrated system of hardware and software.
The system materialized for the present invention can be executed on a portable device whose main characteristics are illustrated in Figure 1. The instructions used to activate the digital camera, locate the face in a photo and perform the image processing for segmentation of the face regions, among others, are processed in the CPU (110) (Central Processing Unit). The camera (111) captures images in real time in preview mode. Information and data, such as the color palettes of the products used in the makeup and photos with the makeup applied, are saved in the storage medium (112). The information input devices (113) correspond to the part of the hardware that intercepts the keyboard events and the user's touch on the touch screen. The audio components (114) capture and reproduce sounds and, in turn, form part of the hardware system of the present invention. The connectivity component (115) allows the user to use remote services, such as social networks and e-mail, to share and disseminate the images with applied makeup. To display the data and images presented by the portable device, the display medium (116) is used.
The system conceived from the present invention begins when it is executed by the user through a portable device where it was previously installed. After it starts, the camera is activated and starts to capture the frames at run time, showing them on the display (210) of the portable device at a certain display rate (e.g.: 30 FPS). It is in this first screen, illustrated in Figure 2, that the frames are captured and sent to the method that performs the location of the face. To optimize this process, a crosshair (211) was created in which the user must fit her face. With this, the efficiency of the face localization process was improved, since the processing is carried out only within the crosshair region. By framing the face in the crosshair, an appropriate proportion is achieved between the size of the photo and the region occupied by the face, reducing, in turn, the number of scales used to search for the face. The framing of the face in the crosshair makes the user position herself at a distance from the camera that provides a better application of makeup and a better view of it.
Many portable devices come equipped with two cameras, one rear and one front, where, in general, the first has higher quality and more features. However, with the front camera it is possible to simulate a mirror, since it is located on the same side as the display, which makes self-portraits possible. Taking advantage of this feature, the present invention offers the user two capture options: 1) through the front camera, it is possible to use the portable device as a mirror, as the user can preview the self-portrait before capturing it; and 2) with the rear camera, photos are captured at higher resolutions.
After capturing the photo, the procedure is performed to locate the face and segment its regions. To assist in the segmentation of the facial regions, points of interest are obtained first. The system materialized by the present invention offers an interface for the user to adjust the regions of the face through points of interest. These adjustments are necessary to refine the result of the segmentation of the regions to be made up. The interface for making these adjustments is shown in Figure 3. This interface has two side bars (310) and (311): in one bar, functions such as undo and redo are accessed; in the other bar, there are functions for enlarging the image in a certain region of the face, such as the mouth, eyebrow or eyes, increasing, in turn, the accuracy of the adjustments made through the points of interest found.
From the points of interest found, polygons are formed, which are interconnected through Bézier interpolation, obtaining polygons that delimit the regions of the user's face. In (312), (313), (314) and (315) the polygons that represent the regions of the face, eyes, eyebrows and mouth, respectively, are presented.
After the necessary adjustments in the points of interest, the necessary masks are then created to segment the regions of the face where the makeup will be applied.
Figure 4 represents the interface used for applying makeup, with two toolbars (410) and (411) containing several functionalities, such as product selection, color selection, style selection, redo / undo makeup actions, among others. After choosing the product and color to be used, the user performs the makeup application. This process is carried out from the touchscreen of the portable device, where it is possible to use your fingers to apply the makeup, mix the colors or increase the intensity of a certain product. The result of applying makeup is shown on the display of the portable device (412).
The method of the present invention performs virtual makeup. To do so, the method locates the face and its constituent parts (eyes, eyebrows and mouth), segments the regions of the face, searches for points of interest in the regions of the face, creates masks to define the regions where the makeup will be applied, avoids the "smudging" of the made-up region, combines the colors of the skin with those of the applied product in order to provide a realistic effect to the makeup result, and applies the makeup using the fingers, making the simulation similar to the real process of applying makeup and allowing different styles of makeup to be viewed efficiently and effectively.
The makeup simulation process of the present invention is basically divided into three main stages: "Location of the face regions (510)"; "Obtaining points of interest (511)"; "Application of virtual makeup (512)", illustrated in Figure 5.
The steps of "Locating face regions (510)" and "Obtaining points of interest (511)" are carried out right after the photo is taken and aim to find the user's face in the region of the sight, eliminating parts that do not belong to it, and, thus, optimizing the segmentation of the regions of the face.
The steps involved in the "Locating the face regions" step are presented sequentially in Figure 6. The first step in locating the face regions, illustrated in Figure 7, is to cut out the crosshair region (610), since the captured photo (710) has a lot of unnecessary information that can negatively interfere in this process. For this reason, only the region of the rectangle corresponding to the crosshair (711) is selected and the pixels present in this region are copied to a new image (712). This reduces the computational cost of the location of the face, in addition to increasing the accuracy of the detection process.
After selecting only the crosshair region, an attempt is then made to locate the face region (611), the eye region (612) and the mouth region (613) of the user. For this, a field of Artificial Intelligence known as Machine Learning was used. Machine Learning techniques use a collection of data to "teach the machine" to answer questions about it. The present invention used these techniques to verify the existence of a face in a digital image.
Machine learning techniques in general are divided into two phases: training and prediction. To perform the training, it is necessary to build a model of the data collection. The model consists of a set of mathematical representations with the characteristics of the data for learning. These mathematical representations are known as "signatures" or "features". During the execution of the training, the set of features is analyzed, and the weights and thresholds, among other parameters, are adjusted to maximize the learning process. The prediction will use the model generated in the training to make a decision or classify a data set.
To detect the face to be made up, a Machine Learning technique was sought that is capable of returning whether the information present in a digital image is a face, efficiently and effectively. The technique used by the present invention employs binary classifiers (face or non-face) of the boosting type, whose characteristics are: high detection and low rejection. Said technique consists in the use of a chain of weak rejection classifiers. The aforementioned method can be used to locate different types of objects, depending on the training model to which it is submitted. For the present invention, one previous training was used to detect frontal faces and another to detect eyes.
Using the aforementioned techniques, the face is located (810), as shown in Figure 8, selecting as the region of interest only the region defined by the classifier as being a face. In this way, it avoids processing areas that do not belong to the user's face. Then, a search for the eyes (811) is performed inside the face region. Eye detection is done similarly to face detection, changing only the training model used in the classification.
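The patent does not name a particular implementation of this boosting cascade; a minimal sketch of the face and eye localization, assuming OpenCV's bundled Haar cascades as stand-ins for the frontal-face and eye training models mentioned above (all function and parameter names are illustrative), could be:

```python
import cv2

# Assumption: OpenCV's pre-trained Haar cascades stand in for the frontal-face
# and eye training models referred to in the text.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def locate_face_and_eyes(crosshair_bgr):
    """Return the face rectangle and the eye rectangles found inside it, or None."""
    gray = cv2.cvtColor(crosshair_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                 # no face found: a new image is requested (815)
    x, y, w, h = faces[0]           # region defined by the classifier as a face (810)
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w],
                                        scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None                 # eyes not found: a new image is requested
    return (x, y, w, h), eyes       # eye rectangles are relative to the face region
```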
After obtaining the positions and dimensions of the face region and of the eye regions, it becomes possible to estimate the mouth region (613) without using a classifier. This estimation is presented in (812), and is calculated from the proportion between the parts of the face, where the left side of the mouth region is aligned with the center of the left eye and its right side is aligned with the center of the right eye. Then, based on the result of the location of the eyes, the centers of the eyes are calculated to obtain the left side and the right side of the mouth region. It is also noted that the top of the mouth region is positioned halfway between the position of the eyes and the chin, and the base of the mouth region corresponds to half the distance between the top of the mouth region and the chin.
Equations 1 and 2 below present the calculations used to estimate the region where the mouth is located.
Equation 1:
mouth_x0 = (eye_left_x0 + eye_left_x1) / 2
mouth_x1 = (eye_right_x0 + eye_right_x1) / 2
Equation 2:
mouth_y0 = ((eye_left_y0 + eye_right_y0) / 2 + face_y1) / 2
mouth_y1 = mouth_y0 + (face_y1 - mouth_y0) / 2
Equation 1 estimates the positions of the left and right sides of the mouth region, represented by mouth_x0 and mouth_x1 respectively. Through this equation, the center of the left eye and the center of the right eye are computed, where eye_left_x0 and eye_left_x1 represent the left and right limits of the left eye region and eye_right_x0 and eye_right_x1 represent the left and right limits of the right eye region. Equation 2 calculates the positions of the top and bottom of the mouth region. On the y-axis, the top of the mouth region is represented by mouth_y0 and corresponds to the midpoint between the average of the tops of the eyes, where eye_left_y0 is the top of the left eye region and eye_right_y0 is the top of the right eye region, and the base of the face. Also on the y-axis, the base of the mouth region is represented by mouth_y1, which corresponds to the sum of mouth_y0 with half the distance between the base of the face and the top of the mouth region, where face_y1 represents the base of the face.
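A minimal sketch of this estimation, following Equations 1 and 2 as reconstructed above and assuming eye boxes given as (x0, y0, x1, y1) tuples in image coordinates (hypothetical names), might be:

```python
def estimate_mouth_region(eye_left, eye_right, face_y1):
    """Estimate the mouth search region from the two eye boxes and the base of the
    face (face_y1), per Equations 1 and 2 as reconstructed above."""
    mouth_x0 = (eye_left[0] + eye_left[2]) / 2.0      # center of the left eye
    mouth_x1 = (eye_right[0] + eye_right[2]) / 2.0    # center of the right eye
    eyes_top = (eye_left[1] + eye_right[1]) / 2.0     # average of the eye tops
    mouth_y0 = (eyes_top + face_y1) / 2.0             # halfway between eyes and chin
    mouth_y1 = mouth_y0 + (face_y1 - mouth_y0) / 2.0  # halfway between mouth top and chin
    return mouth_x0, mouth_y0, mouth_x1, mouth_y1
```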
If the classifier does not find the face (813) or eyes (814) region in the input image, then this step is completed and a new image is requested (815), starting the process for applying the makeup again.
After obtaining the regions of the user's face, the stage for obtaining points of interest (511) begins. To calculate the points of interest, it is necessary to segment the parts of the face used for makeup.
The flow shown in Figure 9 presents the steps taken to obtain the points of interest necessary for the segmentation of the regions where the makeup will be applied. To perform the segmentation of the parts of the face, first, the image, which is in the RGB color space (Red, Green and Blue), is converted to the YIQ color space (910), where Y stores the luminance information, which is the amount of light present in the image and represents the grayscale image, and components I and Q store the chrominance information and represent the colors.
The method used to convert the image from RGB to YIQ is presented in Equation 3, which performs the approximate conversion from RGB to YIQ.
Equation 3:
Y = 0.299*R + 0.587*G + 0.114*B
I = 0.596*R - 0.274*G - 0.322*B
Q = 0.211*R - 0.523*G + 0.312*B
where R, G, B ∈ [0, 1]
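A short sketch of this conversion, using the standard approximate NTSC RGB-to-YIQ coefficients (assumed here to be the conversion the patent refers to):

```python
import numpy as np

# Standard approximate NTSC RGB-to-YIQ matrix; rows produce Y, I and Q.
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1];
    returns an array with the Y, I and Q planes stacked on the last axis."""
    return rgb @ RGB_TO_YIQ.T
```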
The first points of interest obtained are those of the eyes (911), whose method used to obtain them is shown in Figure 10. Channel I (1010) is used to segment the eye region, due to the great distance between the pixels of the skin and the pixels of the eyes, which facilitates their separation. In 1011 the image of the eye region is presented with information from channel I, where it is possible to notice the difference between skin tone and eye tone.
Seeking to decrease the computational cost, the resolution of the selected region (1012) is decreased, also improving the grouping of pixels for segmentation. The selected region will have its dimensions reduced by 50% of its size, which will cause a decrease in the time of execution of the process of obtaining the points of interest of the eyes.
Channel I values comprise a range between - 0.5957 and 0.5957. In order to reduce the number of floating point operations, the image (1013) is normalized to integer values between 0 and 255. Starting from normalization and conversion to integer values, the flexibility of the cut-off threshold used in the segmentation is also improved. The normalized image is shown in (1014).
Equation 4:
I'(x, y) = 255 * (I(x, y) - min(I)) / (max(I) - min(I))
The calculation used to normalize the image is presented in Equation 4, where x represents a column and y represents a line. The min and max functions return the lowest and highest values of an array respectively. I (x, y) contains the original pixels of channel I, and I '(x, y) receives the normalized values.
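A sketch of the downscaling and normalization described above (Equation 4 as reconstructed), with illustrative parameter names:

```python
import cv2
import numpy as np

def normalized_channel_i(i_channel, scale=0.5):
    """Halve the resolution of the selected I-channel region and normalize it to
    integers in [0, 255]; 'scale' is an assumed parameter name."""
    small = cv2.resize(i_channel, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    lo, hi = float(small.min()), float(small.max())
    norm = 255.0 * (small - lo) / (hi - lo)      # Equation 4
    return norm.astype(np.uint8)
```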
The segmentation of the eye region is performed by obtaining a binary mask (1015), where the portion of pixels with values equal to 1 (one) corresponds to the eyes of the user. To obtain the mask, it is necessary to perform a binarization process, which aims to segment the image in two colors. In 1016 an example of a mask obtained after binarization is illustrated.
Equation 5:
avg = (1 / (Me * Ne)) * Σx Σy I'(x, y)
Equation 6:
Ie = avg * (1 - perc)
To binarize the image, it is necessary to calculate a cut-off value that separates the regions to be segmented. This cut-off value is an intermediate shade of color and is known as the threshold. Obtaining the threshold to segment the eyes is presented in Equations 5 and 6. Equation 5 calculates the average of the eye region, where Me is the height and Ne is the width of the eye region. This average is calculated using the normalized channel I values, represented by I'(x, y). As the shades of the eye region are usually smaller than those of the rest of the image, Ie is obtained by applying a percentage of the average value using (1 - perc), as shown in Equation 6, in turn obtaining the threshold adjusted to properly binarize the user's eyes. The perc variable is a normalized value between 0 and 1 that represents a percentage of the average (non-limiting example: perc = 0.25).
Equation 7:
mask(x, y) = 1, if I'(x, y) ≤ Ie; mask(x, y) = 0, otherwise
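A compact sketch of this thresholding, following Equations 5-7 as reconstructed above (perc = 0.25 as in the non-limiting example):

```python
import numpy as np

def binarize_eye_region(i_norm, perc=0.25):
    """Binary eye mask from the normalized I channel: pixels at or below
    (1 - perc) times the regional mean are set to 1."""
    avg = i_norm.mean()                              # Equation 5
    threshold = avg * (1.0 - perc)                   # Equation 6
    return (i_norm <= threshold).astype(np.uint8)    # Equation 7
```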
After obtaining the mask, many regions can be identified as false positives. It was then defined that the valid eye region is the one with the largest area (1017). To separate the different regions and obtain the area of each region, the contours of all the polygons present in the mask are sought and then the area of each polygon found is calculated, keeping only the one with the largest area. The contours of the polygons are obtained through a method for tracking contours in binary images described in [8].
Equation 8:
le = min(Xe), re = max(Xe), te = min(Ye), be = max(Ye)
After carrying out the aforementioned process, the bounding rectangle of the area corresponding to the user's eye (1018) is then calculated. Equation 8 presents the calculation used to obtain the bounding rectangle of the eye, where min and max return the smallest and largest values of a vector, Xe represents the horizontal coordinates and Ye represents the vertical coordinates of the user's eye. The coordinates of the bounding rectangle are stored in le (left), re (right), te (top) and be (base). The result of this operation is illustrated in (1019).
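The patent cites a contour-tracking method [8] without naming an implementation; a sketch using OpenCV's contour functions as a stand-in (assuming OpenCV 4) could be:

```python
import cv2

def largest_region_bounding_rect(mask):
    """Keep only the largest connected polygon of the binary mask and return its
    bounding rectangle as (le, te, re, be)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # region with the largest area (1017)
    x, y, w, h = cv2.boundingRect(largest)
    return x, y, x + w, y + h                      # le, te, re, be
```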
The points of interest of the eye are obtained (1020) from the bounding rectangle calculated in the process described in the previous paragraph. Each eye has four points of interest illustrated in (1021).
Equation 9:
point_x0 = (le + re) / 2, point_y0 = te
point_x1 = (le + re) / 2, point_y1 = be
point_x2 = le, point_y2 = (te + be) / 2
point_x3 = re, point_y3 = (te + be) / 2
Equation 9 presents the calculation used to obtain the points of interest, where point_x0, point_x1, point_x2 and point_x3 represent the coordinates of the points on the x-axis and point_y0, point_y1, point_y2 and point_y3 the coordinates of the points on the y-axis. The first point is located in the upper center of the bounding rectangle and the second point in the lower center. The third point and the fourth point are located in the left center and in the right center, respectively, of the bounding rectangle. From these points, the user's eye area is obtained, and they also serve as parameters for obtaining the regions around the eyes where the makeup will be applied.
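A direct sketch of Equation 9 as reconstructed above:

```python
def eye_points_of_interest(le, te, re, be):
    """Four eye landmarks from the bounding rectangle: upper center, lower center,
    left center and right center (Equation 9 as reconstructed above)."""
    cx, cy = (le + re) / 2.0, (te + be) / 2.0
    return [(cx, te), (cx, be), (le, cy), (re, cy)]
```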
Obtaining the points of interest in the mouth (912) is carried out by analyzing the pixels of the Q channel by performing a process similar to that used to obtain the points of interest in the eyes. The procedure for estimating points of interest in the mouth is shown in Figure 11.
In order to calculate the points of interest of the mouth, only the region of the mouth in the Q channel (1110) is selected. From the Q channel it is possible to segment the mouth area more easily due to the difference between skin tone and mouth tone, as can be seen in (1111). The image on the Q channel has a high level of unwanted artifacts, which makes it difficult to separate the mouth area from the rest of the image. In order to solve this problem, a low-pass filter is applied that reduces the contrast (1112) of the image and eliminates unnecessary elements that make it difficult to obtain points of interest. The filter used is the "Median" and the result of its application is presented in (1113).
The segmentation of the mouth region is performed by creating a binary mask that separates the mouth from the rest of the image (1114). The mask creation process is similar to the aforementioned process used to estimate the points of interest in the eyes. Binarization is carried out by executing Equations 5, 6 and 7, with the parameters changed so that the calculations are made in the mouth region. The result of the binarization of the mouth is exemplified in (1115).
The calculation of the contour of the areas obtained in the binarization is performed using the same method used in obtaining the points of interest of the eyes, keeping only the contour of the largest region (1116). Then the bounding rectangle (1117) is calculated using Equation 8, the result of which is presented in (1118).
After obtaining the bounding rectangle, the method that estimates the points of interest of the mouth (1119) is performed. Figure 12 illustrates the mapping of points of interest in the mouth, which is performed inside the bounding rectangle of the mouth and uses the mouth mask to search for the coordinates of the points of interest, as described below and illustrated in the sketch after this list: - The first point of interest (1210) corresponds to the coordinate of the first non-null pixel of the first column within the bounding rectangle;
The second point of interest (1211) is represented by the coordinates of the lower center of the bounding rectangle; - The third point of interest (1212) corresponds to the coordinates of the first non-null pixel of the last column within the bounding rectangle;
The fourth point of interest (1213) is calculated using a scan line with a 45° orientation, starting at the center of the bounding rectangle. This point corresponds to the coordinates of the last non-zero pixel of the scan line. - The fifth point of interest (1214) is estimated from a scan line that starts at the center of the bounding rectangle and has a 90° orientation. The coordinate of the last non-zero pixel of this scan line corresponds to the fifth point of interest in the mouth. - The sixth point of interest (1215) corresponds to the last non-zero pixel of a scan line that starts in the center of the bounding rectangle and has a 135° orientation.
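A minimal sketch of such a scan-line search over the binary mouth mask, assuming image coordinates with the y-axis pointing down (so a 90° orientation walks upward):

```python
import math

def last_nonzero_on_scanline(mask, start_x, start_y, angle_deg):
    """Walk from (start_x, start_y) along the given orientation and return the
    coordinates of the last non-zero pixel of the mask found along the way."""
    h, w = mask.shape
    dx = math.cos(math.radians(angle_deg))
    dy = -math.sin(math.radians(angle_deg))     # y grows downward in image coordinates
    x, y, last = float(start_x), float(start_y), None
    while 0 <= int(x) < w and 0 <= int(y) < h:
        if mask[int(y), int(x)]:
            last = (int(x), int(y))
        x, y = x + dx, y + dy
    return last
```

For the fourth, fifth and sixth mouth points, this search would be started at the center of the bounding rectangle with orientations of 45°, 90° and 135°, respectively.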
After acquiring the points of interest in the mouth, the method used to obtain the points of interest for the eyebrows is then performed (913). This is shown in Figure 13 and in turn is performed similarly to the method of obtaining points of interest from the eyes and mouth.
To estimate the points of interest of the eyebrows, a region is selected between the eyes and the forehead of the user that corresponds to half of the distance between the eyes and the top of the face. The process used to calculate the points of interest of the eyebrows analyzes the pixels of the Y channel, which corresponds to the gray levels of the image (1310). The region selected to be processed is shown in (1311). To obtain the points of interest of the eyebrow, it is necessary to create a mask that separates the eyebrow from the rest of the image (1312). For this, the same binarization methods used in the search for points of interest in the eyes and mouth were used, but with parameters altered for the eyebrow region. The result of the eyebrow binarization is illustrated in (1313). Analogously to the aforementioned methods, the contours of the polygons present in the mask are obtained, and then the largest region is selected (1314). The result of this operation is presented in (1315). Then, the bounding rectangle of the obtained area (1316) is calculated, the result of which is illustrated in (1317). The procedure for estimating the points of interest (1318) is performed using the bounding rectangle of the area of each eyebrow, illustrated in Figure 14. This procedure is described below and demonstrates how the points of interest of the eyebrows are obtained: - The first point of interest (1410) corresponds to the coordinate of the first non-null pixel of the first column within the bounding rectangle of the left eyebrow; - The second point of interest (1411) corresponds to the coordinate of the first non-zero pixel of the last column within the bounding rectangle of the left eyebrow; - The third point of interest (1412) corresponds to the coordinate of the first non-null pixel of the first line within the bounding rectangle of the left eyebrow. Due to different eyebrow shapes, the coordinate of the third point could be estimated incorrectly. To solve this problem, a minimum distance was defined between this point and the other points of the left eyebrow. If the distance of the third point is less than the defined minimum distance, then the coordinate of the centroid of the non-null pixels is assigned to it; - The fourth point of interest (1413) is calculated similarly to the first, but uses the right eyebrow bounding rectangle; - The fifth point of interest (1414) is obtained in a similar way to the second, but uses the right eyebrow bounding rectangle; - The sixth point of interest (1415) is calculated similarly to the third, but the scan line is traversed from right to left.
After estimating the points of interest for the eyebrows, the process of obtaining the points of interest of the face begins (914). The points of interest obtained for the eyes and mouth are used to remove the remaining parts of the user's face from the region of interest, in addition to serving as parameters for the method that estimates the points of interest of the face.
Before starting the image analysis, the region between the eyebrows and the mouth is removed, facilitating, in turn, the mapping of the face. Then the Sobel filter is applied in the vertical and horizontal directions in order to extract the edges of the face. In order to search for the points of interest, a sliding window was used that moves according to the directions shown in Figure 15. The coordinate of each point of interest corresponds to the position of the first window whose difference between its pixels is greater than a predefined threshold and that is located outside the region between the eyebrows and the mouth. All points use basically the same method, changing only the parameters related to the orientation and the reference point, where the orientation defines the direction of the scan line and the reference point establishes the initial position of the scan line (a sketch of this search is given after the list below). The parameters used to estimate each point of interest on the face are described below: - The first point of interest on the face (1510) is based on the point of interest of the eye represented by (1518) and the orientation of its scan line is equal to 150°; - The second point of interest of the face (1511) uses the point of interest of the eye represented by (1519) as a reference and has a scan line with an orientation equal to 60°; - The third point of interest of the face (1512) has as reference the point of interest of the eye represented by (1520) and its scan line has an orientation equal to 170°; - The fourth point of interest of the face (1513) uses the point of interest of the eye represented by (1521) as a reference point, and its scan line has a 10° orientation; - The fifth point of interest of the face (1514) has as its reference point the point of interest of the mouth represented by (1522) and the orientation of its scan line is equal to 200°; - The sixth point of interest of the face (1515) uses the point of interest of the mouth represented by (1523) as a reference point and has a scan line with an orientation equal to 340°; - The seventh point of interest of the face (1516) has as a reference point the point of interest of the mouth represented by (1524) and its scan line has a 270° orientation; - The eighth point of interest of the face (1517) is based on the point represented by (1525), which corresponds to the center of the distance between the points of interest of the eyes (1518) and (1519). The scan line orientation of this point is equal to 90°.
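A sketch of this edge-guided search, with 'win' and 'threshold' as assumed, illustrative parameters (the patent only states that the window difference is compared with a predefined threshold):

```python
import cv2
import math

def face_contour_point(gray, start, angle_deg, win=5, threshold=40.0):
    """Slide a small window from a reference point along the given orientation over
    the Sobel edge magnitude and return the first position whose pixel difference
    exceeds the threshold."""
    edges = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3),
                          cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))
    h, w = edges.shape
    dx = math.cos(math.radians(angle_deg))
    dy = -math.sin(math.radians(angle_deg))
    x, y = float(start[0]), float(start[1])
    while win <= int(x) < w - win and win <= int(y) < h - win:
        window = edges[int(y) - win:int(y) + win, int(x) - win:int(x) + win]
        if float(window.max() - window.min()) > threshold:   # strong edge: face boundary
            return int(x), int(y)
        x, y = x + dx, y + dy
    return None
```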
In order to improve the makeup result, after obtaining all the points of interest in the face regions, the adjustment phase begins. In this step, the user must manually correct the position of the estimated points, if they are not in accordance with the user's wishes.
Points of interest are used to form polygons that are interconnected through Bézier interpolation, in addition to calculating the regions where makeup will be applied. It should be noted that the option of adopting the Bézier interpolation mechanism concerns the need to connect the points of interest of the masks using curved lines, as in real faces. This is done in a way similar to vector drawing applications, which connect vertices through edges and use Bézier interpolation to define the curvature of these edges. The process used in this invention is thus a vectorization of parts of the face.
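The patent does not specify the degree of the Bézier curves; a sketch that interconnects consecutive points of interest with cubic Bézier segments, deriving the control points from the neighbouring points (a Catmull-Rom-style choice assumed here for illustration), could be:

```python
import numpy as np

def bezier_contour(points, steps=20):
    """Connect the points of interest of a region with smooth cubic Bézier segments
    and return the sampled closed contour."""
    pts = [np.asarray(p, dtype=float) for p in points]
    n, curve = len(pts), []
    t = np.linspace(0.0, 1.0, steps)[:, None]
    for i in range(n):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[(i + 1) % n], pts[(i + 2) % n]
        c1 = p1 + (p2 - p0) / 6.0          # first control point (from the neighbours)
        c2 = p2 - (p3 - p1) / 6.0          # second control point
        seg = ((1 - t) ** 3) * p1 + 3 * ((1 - t) ** 2) * t * c1 \
              + 3 * (1 - t) * (t ** 2) * c2 + (t ** 3) * p2
        curve.extend(seg)
    return np.array(curve)
```

The sampled contour can then be filled (for example with cv2.fillPoly) to produce the segmentation mask of the corresponding region.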
After finding all points of interest, the last stage of the process called "application of virtual makeup" (512) begins, which is the stage responsible for applying makeup in the regions of interest.
To apply makeup, the present invention uses the mapping of the user's fingers through the touchscreen of the portable device. Through this, it is possible to obtain the coordinates of the position that the user touched the screen.
The makeup is applied through the interaction of the user with the device, where, through touch and gesture of dragging the finger over the region to be made up, the user simulates the real application of makeup. As the user passes the finger over the region that simulates makeup, the intensity of the selected color in the makeup region increases.
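A minimal sketch of this interaction, assuming a per-pixel intensity map accumulated at each touch event and limited by the product's masks ('radius' and 'step' are illustrative parameters, not values stated in the patent):

```python
import numpy as np

def apply_touch(intensity, region_mask, max_mask, x, y, radius=12, step=0.08):
    """Accumulate makeup intensity around a touch at (x, y), restricted to the
    product's region mask and limited by the maximum-intensity mask."""
    h, w = intensity.shape
    ys, xs = np.ogrid[:h, :w]
    brush = ((xs - x) ** 2 + (ys - y) ** 2) <= radius ** 2
    intensity[brush] = np.minimum(intensity[brush] + step * region_mask[brush],
                                  max_mask[brush])
    return intensity

def blend(skin_bgr, product_bgr, intensity):
    """Combine the skin color with the product color weighted by the accumulated intensity."""
    alpha = intensity[..., None]
    return (skin_bgr * (1.0 - alpha) + product_bgr * alpha).astype(np.uint8)
```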
The application materialized as an embodiment of the present invention employs parallel processing techniques to carry out the processing of information necessary for the combination of the skin color and the colors of the makeup products.
In this way, it is possible to intercept the user's touch on the screen and at the same time process other regions that have already been touched, improving the final result of the makeup and the usability of the application.
Each makeup product has different colors that can be combined, in addition to having different characteristics with regard to the way it is applied on the face. For this reason, it is necessary to create masks that identify the regions to be made up for each product.
The creation of these masks prevents the "smudging" of the makeup when it is applied with the user's fingers, in addition to defining the correct size of each region to be made up.
In general, the creation of the masks is done by calculating the coordinates at the points (x, y). These points are obtained using the equation of the line, where the angular coefficient is equivalent to the displacement in relation to the starting point.
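A small sketch of this construction, where each mask vertex is obtained from a reference point, a displacement (a fraction of some facial distance, as in the percentages listed further below) and an angular coefficient:

```python
import math

def offset_point(start, displacement, angle_rad):
    """Compute a mask vertex by walking 'displacement' pixels from 'start' along the
    direction given by 'angle_rad' (parametric equation of the line)."""
    return (start[0] + displacement * math.cos(angle_rad),
            start[1] - displacement * math.sin(angle_rad))   # y grows downward
```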
The mapped regions correspond to the products simulated by the present invention. These products are eyeshadow (1612), eye pencil (1613), lipstick (1614), mouth pencil (1615), foundation (1616) and blush (1617). Figure 16 shows the regions corresponding to the makeup products used among the preferred embodiments of the present invention.
In addition to the mask that defines the region where the makeup will be applied, the present invention creates another mask that represents the maximum level of color intensity, which is nothing more than a limit of color intensity that the makeup can reach. Masks that define maximum values of color intensity for each region/product are necessary so that the combination of skin tones and the colors of the makeup products is not saturated and so that the contours of the made-up regions are smoothed, showing the gradient that is present in real makeup.
The masks used to perform the makeup are created from the flow shown in Figure 17. The first masks created are those of the shadow (1713). For the application of eyeshadow, 3 (three) masks are used: one represents the eye region; another the region of application of the shadow; and the last one the maximum values of color intensity, as illustrated in Figure 18.
The eye mask (1811) is created through the polygon formed by the interconnection of the points of interest of the eyes. These points are interconnected by means of Bézier interpolation, forming a smooth curve for each edge of the polygon; the interior of this polygon is then filled. The result is presented in 1812.
The shadow region mask (1813) is created similarly to the first, but the points (vertices) of the polygons that form this mask are calculated from the points of interest obtained for the eye and eyebrow regions. In general, the displacement and orientation of the input points are calculated to obtain the mask points. For the mask used in applying eyeshadow, 7 (seven) reference points are used as input.
The shadow mask of each eye comprises 4 (four) points (vertices) that are obtained from 3 (three) points of interest of the eyebrow and 4 (four) points of interest of the eye, on the right or left side. The points of the eyeshadow mask, points 7 (1918), 8 (1919), 9 (1920) and 10 (1921), are obtained from the points of interest around the eye, represented by points 0 (1911), 1 (1912), 2 (1913) and 3 (1914), and the points of interest of the eyebrow, represented by points 4 (1915), 5 (1916) and 6 (1917). Figure 19 illustrates the first step in obtaining the points needed to calculate the left eye shadow mask.
In this step, the points are obtained from the starting point and the displacement:
- point 7 has point 4 as its starting point and its displacement is 30% of the difference between point 4 and point 0;
- point 8 starts at point 5 and its displacement is 40% of the difference between point 5 and point 3;
- point 9 starts at point 6 and its displacement is equal to 60% of the difference between point 6 and point 2; and
- point 10 starts at point 1 and its displacement is 50% of the height of the eye.
Then, the slope of the points is calculated (a sketch combining these displacements and slopes follows the list):
- point 7 has a slope of 9π/8;
- point 8 has a slope of 5π/4;
- point 9 has a slope of -π/8; and
- point 10 has a slope of 5π/8.
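A minimal sketch of how points 7 to 10 of the left-eye shadow mask could be computed from the displacements and slopes above, assuming each displacement magnitude is applied along the corresponding slope angle (this composition is an interpretation of the description, not the patent's literal code):

```python
import math

def offset(ref, dist_val, angle):
    """Displace (x, y) reference point by dist_val along angle."""
    return (ref[0] + dist_val * math.cos(angle), ref[1] + dist_val * math.sin(angle))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def left_eyeshadow_vertices(eye, brow, eye_height):
    """eye: points of interest 0..3 around the eye; brow: points 4..6."""
    p0, p1, p2, p3 = eye
    p4, p5, p6 = brow
    v7 = offset(p4, 0.30 * dist(p4, p0), 9 * math.pi / 8)
    v8 = offset(p5, 0.40 * dist(p5, p3), 5 * math.pi / 4)
    v9 = offset(p6, 0.60 * dist(p6, p2), -math.pi / 8)
    v10 = offset(p1, 0.50 * eye_height, 5 * math.pi / 8)
    return [v7, v8, v9, v10]
```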
These procedures are necessary to calculate the points of the left eye shadow mask. To calculate the right eye, the same procedure is performed, but the points are arranged symmetrically as in a mirror, as shown in Figure 20.
The orientation of the slope of the points is also inversely symmetrical. For the right eye:
- point 7 has a slope of -π/8;
- point 8 has a slope of 7π/4;
- point 9 has a slope of 9π/8; and
- point 10 has a slope of 3π/8.
The points obtained are interconnected and the entire interior of the obtained polygons is filled for the application of the shadow, except the part corresponding to the contours of the eyes. The result is presented in 1814.
The third mask is the result of applying the Gaussian filter to the second mask, which has the effect of smoothing the outline of the shadow application mask, and is illustrated in 1816.
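A minimal sketch of producing this smoothed maximum-intensity mask by blurring the binary application mask with OpenCV's GaussianBlur (kernel size and sigma here are illustrative assumptions):

```python
import cv2
import numpy as np

def max_intensity_mask(region_mask, ksize=31, sigma=10.0):
    """Blur a binary application mask (values 0/255) so that its borders
    fade gradually; the result caps the makeup intensity per pixel."""
    mask = region_mask.astype(np.float32) / 255.0
    blurred = cv2.GaussianBlur(mask, (ksize, ksize), sigma)
    return (blurred * 255.0).astype(np.uint8)
```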
The method offers flexibility, since it gives the user the experience of mixing colors when applying a product and allows different strategies for applying different eyeshadow colors to different regions of the eyes, enriching the usability of the application.
The mask for applying the eye pencil (1712) is created in a similar way to the eye shadow mask.
The application of the eye pencil aims to enhance the contour of the eye. To create the eye pencil mask, the displacement of the points is calculated to form the region corresponding to its potential application, since it is up to the user to decide on which segment of the application area she wants to apply it (upper eyelid contour, lower eyelid contour, outer or inner corner of the eye).
Figure 21 illustrates the points used to create the left eye pencil mask. Points 0 (2110), 1 (2111), 2 (2112) and 3 (2113) are the points of interest of the eye. Points 4 (2114), 5 (2115), 6 (2116) and 7 (2117) are used in the creation of the eye pencil mask and their coordinates are obtained from the displacement of the reference points of the eye:
- point 4 has coordinates equal to the coordinates of point 0 plus 10% of the width of the eye;
- point 5 has coordinates equal to the coordinates of point 1 plus 10% of the height of the eye;
- point 6 has coordinates equal to the coordinates of point 2 plus 5% of the width of the eye;
- point 7 has coordinates equal to the coordinates of point 3 plus 10% of the height of the eye.
Then, points 4, 5, 6 and 7 are interconnected according to Bézier interpolation, forming a polygon with curved edges. The region bounded externally by this polygon and internally by the polygon formed by the points of interest that define the eye contour is the region where the eye pencil is applied.
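A minimal sketch of building this ring-shaped application region as the difference between the outer polygon and the inner eye-contour polygon, assuming both polygons are already sampled (for example, from the Bézier interpolation step):

```python
import cv2
import numpy as np

def ring_mask(shape, outer_pts, inner_pts):
    """Fill the outer polygon and subtract the inner (eye-contour) polygon,
    leaving the band where the eye pencil may be applied."""
    outer = np.zeros(shape, dtype=np.uint8)
    inner = np.zeros(shape, dtype=np.uint8)
    cv2.fillPoly(outer, [np.int32(outer_pts)], 255)
    cv2.fillPoly(inner, [np.int32(inner_pts)], 255)
    return cv2.subtract(outer, inner)
```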
The pencil mask of the right eye is obtained in a similar way, but the points are arranged symmetrically inverted in relation to those of the left eye. The touch of the fingers defines the region where the eyeliner is effectively applied and then a filter is applied to smooth the edges of the line, making it closer to the effect obtained with real makeup. The filter used is again Gaussian, but with a small dimension (3x3). Figure 22 shows the masks necessary for applying the eye pencil, where 2211 is the final mask used.
Then, the procedure that creates the mask for the lipstick (1713) begins. For applying lipstick, only one mask, which defines the lipstick region, is needed, since it is possible to map the touch of the fingers and define the maximum level of shades to be applied with a single mask. This mask is created using the points of interest obtained for the mouth as a reference. From the points of interest of the mouth, a polygon is created that corresponds to the interconnection of the points by means of Bézier interpolation, so as to draw the curvatures of the mouth contour.
The gray levels of the pixels that make up the mouth are also taken as a reference; they define the maximum levels of lipstick intensity, allowing the use of a single mask for its application. Figure 23 illustrates the mask used to apply the lipstick, where 2311 is the region of the mouth with the gray levels used to define the maximum level of shades.
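A minimal sketch of such a single lipstick mask, assuming the mouth polygon is already sampled and that the gray level of each mouth pixel is reused as its maximum intensity (the conversion and scaling choices are illustrative):

```python
import cv2
import numpy as np

def lipstick_mask(face_bgr, mouth_pts):
    """Fill the mouth polygon and keep the gray level of each mouth pixel
    as the maximum lipstick intensity allowed at that pixel."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    region = np.zeros(gray.shape, dtype=np.uint8)
    cv2.fillPoly(region, [np.int32(mouth_pts)], 255)
    return cv2.bitwise_and(gray, region)   # 0 outside, gray level inside
```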
To create the mouth pencil application mask (1714), the points of interest of the mouth contour are interconnected using Bézier interpolation. The line connecting the points has a thickness that corresponds to that of a generic real mouth pencil.
Similar to eyeliner, the contours of the mouth pencil mask are smoothed using the Gaussian filter. Figure 24 illustrates the mouth pencil mask represented by 2411.
The eye pencil and the mouth pencil use the same mask mechanism to map the user's touch and to define the maximum level of application of color tones.
The following procedure describes the creation of the mask for the base, i.e., the foundation (1714). The mask used for the application of the base considers the points of interest on the face that form a polygon corresponding to the contour region of the face, as shown in Figure 25. The polygon is created by interconnecting the points of interest of the face with curved lines resulting from the Bézier interpolation and is represented by 2511. The application of the base corresponds to filling the region of the polygon, from which the regions of the mouth, eyes and eyebrows are removed.
That is, to create the base mask, the positions of the points of interest of the eyes and mouth are considered, but the pixels corresponding to those regions are disregarded, since the regions of the eyes and mouth are eliminated from the application mask of the base, making it necessary to perform a processing step in the face region to create the base mask (2112).
The eye and mouth regions are removed by mapping their location through the points of interest that define their contours. To remove the eyebrow region, the location of the eyebrows is mapped through their points of interest, but the points of interest of the eyebrows form lines and not polygons. The region around the line formed by these points of interest is then mapped, distinguishing what is skin from what is eyebrow. This is done using an adaptive binarization technique that calculates the threshold separating the tones of the skin pixels from those of the eyebrow pixels. Then the Gaussian filter is applied again, smoothing the edges of the mask, as illustrated in 2513. This mask is used to map the touch of the user and define the maximum level of application of color tones.
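A minimal sketch of separating eyebrow pixels from skin inside a band around the eyebrow polyline; Otsu's method is used here as a stand-in for the adaptive binarization, and the band width is an illustrative assumption:

```python
import cv2
import numpy as np

def eyebrow_mask(face_gray, brow_pts, band=15):
    """Binarize a band around the eyebrow polyline: dark pixels (eyebrow)
    are separated from lighter skin pixels with a threshold computed
    only over the pixels of the band."""
    band_mask = np.zeros(face_gray.shape, dtype=np.uint8)
    cv2.polylines(band_mask, [np.int32(brow_pts)], False, 255, thickness=band)
    band_pixels = face_gray[band_mask > 0].reshape(-1, 1)
    thr, _ = cv2.threshold(band_pixels, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Pixels darker than the threshold inside the band are taken as eyebrow.
    eyebrow = ((face_gray < thr) & (band_mask > 0)).astype(np.uint8) * 255
    return eyebrow
```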
The last masks created are from the blush region (1716). The blush is mapped in the region of the cheekbones (region over the zygomatic bones and just below them). To obtain the blush application regions, it is necessary to estimate 3 (three) reference points on each side of the face.
Figure 26 shows the masks necessary for applying the blush. The way of obtaining these masks is similar to the way in which the eyeshadow masks are obtained. To obtain them, the displacement between the reference points is first calculated. Figure 27 illustrates the reference points for the blush application mask on the left side of the face. Points of interest 0 (2710), 1 (2711), 2 (2712) and 3 (2713) are used as a reference to calculate points 4 (2714), 5 (2715) and 6 (2716), which define the blush application mask.
The following describes how the blush application mask is obtained:
- Point 4 is obtained as follows: the vertical distance between points 0 and 1, which are points of interest of the eye and of the mouth (inner corner of the eye and outer corner of the mouth), is calculated, and a displacement equivalent to 40% of this distance is applied from the starting point (point 0);
- The coordinates of point 5 are calculated using point 2, which is a point of interest of the face contour, and the previously obtained point 4 as references. Point 5 is shifted from point 2 by 10% of the distance between points 2 and 4;
- Point 6 is calculated similarly to point 5. Its coordinates are calculated from point 3, which is another point of interest of the face contour. Point 6 is shifted from point 3 by 10% of the distance between points 3 and 4;
- Points 4, 5 and 6 are interconnected using Bézier interpolation, forming a polygon that is the mask to be filled in when applying the blush.
The described method is used to create the blush mask for the left side of the face, and the mask for the right side of the face is obtained in the same way, however the points are arranged symmetrically inverted in relation to those on the left side.
The mask that defines the maximum intensity levels of the makeup application is obtained by applying a Gaussian filter over the application mask. The size of the filter is equal to the dimension of the largest side of the obtained polygon. The resulting masks are shown in Figure 26, where 2611 is used to map the user's touch and 2613 stores the values with the maximum intensity levels for the blush.
After obtaining all the masks, it is possible to simulate the makeup, mapping the region of each product and avoiding the "smudging" and "saturation" of the makeup region. The masks serve as parameters for the functions that perform the makeup application.
The last stage of the makeup process consists of the user's interaction with the mobile device to simulate the makeup application process itself. To perform this process, the user selects the desired product (2810), the color of this product (2811) and then, using the touchscreen and gestural movements, paints the face region (2812) where she wants to apply the makeup.
Each makeup product is responsible for painting a specific region of the user's face. In order to give prominence to the region of the selected product, the present invention zooms in on this region, highlighting the selected product. Zooming in on the region to be made up also improves the usability of the makeup painting, as the touch area available for makeup application is enlarged. Figure 29 illustrates, on the mobile device's display, the result of selecting each product.
After selecting the product and its color, the user should run her finger over the region she wants to make up, through the touchscreen, for the makeup to be applied.
In general, the interception of the user's touch is performed by an interruption controlled by the operating system of the mobile device. In turn, the operating system places the result of this operation in its event queue, which is consumed in the main thread of the current application. Therefore, when some processing that consumes a considerable slice of computational time is carried out, many events can be lost, impairing the result of the procedure. An example of this: when the user is painting the makeup through the touchscreen, it is necessary to intercept the positioning of the finger on the screen. If the interception is followed by a procedure that consumes a lot of computational time, many points will be lost, since the event queue can only be consumed after the current processing has finished.
To start the makeup painting stage, at least 2 (two) points are needed; these are equivalent to positions of the user's finger on the touchscreen of the mobile device. When the user starts painting the makeup, the procedure constructs the painting lines (3010) for every two points. An example of the painting line built is illustrated in 3011. The first line starts at the initial position of the user's touch. The remaining lines have the end point of the previous line as their starting point.
Each makeup product uses different tools to perform the face painting. For example, the blush brush is larger than the brush used to apply eyeshadow. Because of this, the user's touch with the blush has a greater thickness than the touch with the eyeshadow. Therefore, each product has a different thickness for its painting line.
The procedure should paint the makeup only in the region of the selected product. So, before combining the colors, it is necessary to check whether the painting line corresponds only to the region defined by the selected product. The masks that were created previously are used to perform this verification.
The procedure described by the present invention performs all of the makeup processing on a separate thread, releasing the main thread so that it can consume the events. At the moment the touch is intercepted, its position is captured and stored in a queue that is used by the thread that performs the makeup painting.
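A minimal sketch of this producer/consumer arrangement, using a plain Python queue and thread in place of the platform's event mechanism (on a real device the touch callback would come from the UI toolkit):

```python
import queue
import threading

touch_points = queue.Queue()          # filled by the UI thread on touch events

def on_touch(x, y):
    """Called from the main/UI thread; only enqueues, so it returns quickly."""
    touch_points.put((x, y))

def painting_worker(paint_segment):
    """Runs on a separate thread: consumes pairs of points and paints them."""
    prev = None
    while True:
        point = touch_points.get()
        if point is None:             # sentinel used to stop the worker
            break
        if prev is not None:
            paint_segment(prev, point)
        prev = point

worker = threading.Thread(target=painting_worker,
                          args=(lambda a, b: None,), daemon=True)
worker.start()
```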
In general, the painting of the region to be made up is started when the movement of the user's touch is detected on the screen of the mobile device. This happens when at least 2 (two) points are intercepted, starting the process that performs the combination between the skin color and the color of the product simulating the application of real makeup.
The procedures used to perform the makeup of each product are performed in a similar way.
One of the steps used by all products is the creation of the painting line. This line is a small mask that defines which pixels of the skin will be combined with the color of the selected product. The process for creating the painting line is described by the flow illustrated in Figure 30. The next step (3012) keeps only the pixels that belong both to the painting line and to the mask of the selected product. In 3013, the relationship between the mask and the painting line is presented.
A new painting line is then created (3014), containing only the valid region for painting. In 3015, the result of the line after removing the invalid pixels is illustrated.
To simulate the application of makeup, it is necessary to apply a low-pass filter to smooth the line and decrease the pixel values of the painting line (3016). Then the Gaussian filter is applied to fade the painting line, creating the mask of the region to be painted. The result is presented in 3017 and it is used by the procedure that combines the color of the product and the skin of the face to simulate the makeup. The size of the convolution window and the level of blurring (sigma) of the Gaussian filter are different for each type of product, so that the effect of the simulation is similar to the effect of the application of the real product.
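A minimal sketch of building one painting line from two consecutive touch points, clipping it to the product mask and fading it with a Gaussian filter (line thickness, kernel size and sigma stand in for the per-product values):

```python
import cv2
import numpy as np

def painting_line(shape, p_prev, p_curr, product_mask,
                  thickness=9, ksize=15, sigma=4.0):
    """Draw the segment between two touch points, keep only the pixels inside
    the product mask and blur it so the stroke fades towards its borders."""
    line = np.zeros(shape, dtype=np.uint8)
    cv2.line(line, tuple(p_prev), tuple(p_curr), 255, thickness)
    line = cv2.bitwise_and(line, product_mask)      # discard invalid pixels
    faded = cv2.GaussianBlur(line.astype(np.float32) / 255.0,
                             (ksize, ksize), sigma)
    return faded                                     # values in [0, 1]
```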
The procedure that effectively simulates the face makeup uses the face image, the mask with the maximum intensity levels, the mask of the region to be painted and an accumulator, whose features are described below:
- The face image corresponds to the photograph captured through the mobile device's digital camera. The makeup simulation is performed on this image, which in turn is used as an input and output parameter for the procedure that applies the makeup.
- The mask with the maximum intensity levels defines the maximum level that the color of the product can reach, so that the combination of the skin color and the product color is not saturated; this keeps the makeup painting from becoming opaque and gives it an effect similar to the application of real makeup.
- The mask of the region to be painted defines the color intensity and the pixels to be painted.
- The accumulator is a matrix with the size of the input image, which keeps the current makeup intensity.
Each time the user interacts with the mobile device through the touchscreen, the painting procedure is started: the region of the makeup that was touched is painted and the intensity of the current color is added to the accumulator. If the value of the accumulator pixel plus the intensity of the current color is greater than the corresponding value in the mask with the maximum intensity levels, the combination between the product color and the skin color is not performed for that pixel.

Equation (9) presents the formula used to combine the color of the skin with the color of the product. The matrix I(x, y) corresponds to the image of the face and the matrix A(x, y) corresponds to the region to be made up, which holds the intensity of the colors at each pixel. The color variable is the color of the selected product.
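Since Equation (9) itself is not reproduced here, the sketch below shows one per-pixel combination consistent with the description: the mask of the region to be painted weights the product color against the skin color, while the accumulator and the maximum-intensity mask prevent saturation (the linear blend is an assumption, not the patent's literal formula):

```python
import numpy as np

def apply_stroke(face, paint_line, max_levels, accumulator, color):
    """face: HxWx3 float image; paint_line: HxW stroke weights in [0, 1];
    max_levels, accumulator: HxW intensity maps in [0, 1];
    color: length-3 product color."""
    # Clip the stroke so the accumulated intensity never exceeds the maximum.
    allowed = np.clip(max_levels - accumulator, 0.0, None)
    weight = np.minimum(paint_line, allowed)
    accumulator += weight
    # Linear blend between the current skin color and the product color.
    face[:] = (face * (1.0 - weight[..., None])
               + np.asarray(color) * weight[..., None])
    return face, accumulator
```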
At any time during the makeup simulation process, the application offers the possibility of comparing the face "before" and "after" the makeup application, as well as the option of removing the makeup.
When the user considers the makeup simulation completed and satisfactory, the application allows the image of the made-up face to be saved on the device or shared via Bluetooth, e-mail or social networks. If she does not like the result, the user has the option to restart the process or simply discard the final image.
Although a preferred embodiment of the present invention is shown and described, those skilled in the art will understand that various modifications can be made without departing from the scope of the present invention, as defined by the appended claims.
Claims (19)
[0001]
1. Method for simulating makeup on a user's face in a portable device equipped with a digital camera, characterized by the fact that it comprises the following steps:
- locate, by a processor, the face and the constituent parts of the face;
- segment the face regions;
- search for points of interest in the face regions;
- create masks to define the regions;
- receive a gesture, by a finger of the user, to apply makeup to the regions with masks;
- avoid "blurring" of the region with applied makeup; and
- combine the face colors with the applied makeup, to provide a realistic effect to the applied makeup;
in which the makeup is applied through a user interaction with the device, where, through touch and the gestural movement of dragging a finger over a region where the makeup is applied, the user simulates a real application of makeup, and in which, as the user passes the finger over the region, the intensity of a color of the makeup in the region is increased.
[0002]
2. Method, according to claim 1, characterized by the fact that the segmentation of the face regions comprises:
- cutting out a crosshair region that includes only the face, by selecting the rectangle corresponding to the crosshair and copying the pixels in the rectangle to a new image; and
- finding the region of the user's eyes and mouth in the new image.
[0003]
3. Method, according to claim 2, characterized by the fact that it additionally comprises converting the image from an RGB color space to a YIQ color space, where the Y component stores the luminance information, and the I and Q components store the chrominance information.
[0004]
4. Method, according to claim 3, characterized by the fact that finding the eye region comprises analyzing the pixels of the Q channel, due to the difference between the color of the face and the color of the mouth.
[0005]
5. Method, according to claim 2, characterized by the fact that the mouth region is found by creating a binary mask that separates the mouth from the rest of the new image.
[0006]
6. Method, according to claim 1, characterized by the fact that it additionally comprises a step of adjusting points, in which the user manually corrects the position of the points of interest.
[0007]
7. Method, according to claim 1, characterized by the fact that the user modifies the lines that delimit the regions of the face.
[0008]
8. Method, according to claim 1, characterized by the fact that the user selects one of several makeup styles reflected in the color palettes of the products to be applied on the face.
[0009]
9. Method, according to claim 1, characterized by the fact that the user defines a positioning and intensity of the makeup colors for the different products.
[0010]
10. Method, according to claim 1, characterized by the fact that the user can switch between a face image without makeup and a face image with makeup applied, for comparison, and can remove the applied makeup.
[0011]
11. Method, according to claim 1, characterized by the fact that an image of the face with the applied makeup can be stored locally on the device or shared.
[0012]
12. System for simulating the application of makeup on a portable device equipped with a digital camera and a touchscreen, characterized by comprising:
- the digital camera, which captures an image in real time in preview mode;
- a storage unit to store information and data, including the color palettes of the products used in the makeup and the photos with makeup applied;
- an information entry device that intercepts keyboard events and the user's touch on the touchscreen;
- an audio component that captures and reproduces sounds;
- a connectivity component that connects to remote services, including social networks and e-mail; and
- a display to present the data and the images with the makeup applied by the portable device,
in which the makeup is applied through a user interaction with the device, where, through touch and the gestural movement of dragging a finger over a region where the makeup is applied, the user simulates a real application of the makeup, and in which, as the user passes the finger over the region, the intensity of a color of the makeup in the region is increased.
[0013]
13. System, according to claim 12, characterized by the fact that the digital camera comprises two cameras, one rear and one front, in which the front camera has a higher quality.
[0014]
14. System, according to claim 12, characterized by the fact that makeup is applied by mapping a user's finger on the touchscreen of the portable device.
[0015]
15. System, according to claim 12, characterized by the fact that masks that identify the regions to be made up for each product are provided.
[0016]
16. System, according to claim 15, characterized by the fact that the creation of these masks prevents the "smudging" of the makeup when it is applied with the user's fingers, in addition to defining the correct size of each region to be made up.
[0017]
17. System, according to claim 15, characterized by the fact that the mapped regions correspond to simulated makeup products comprising eyeshadow, eye pencil, lipstick, mouth pencil, foundation and blush.
[0018]
18. System, according to claim 15, characterized by the fact that it also comprises a mask that represents a maximum level of color intensity, that is, a limit of color intensity that the makeup can reach.
[0019]
19. System, according to claim 12, characterized by the fact that each makeup product uses different tools.